Oracle Performance Tuning and Optimization
(Publisher: Macmillan Computer Publishing)
Author(s): Edward Whalen
ISBN: 067230886x
Publication Date: 04/01/96



TPC-A

The TPC-A benchmark was adopted in October 1989 as the first benchmark published by the TPC. Formerly known as the Credit/Debit benchmark, the TPC-A measures performance in an update-intensive environment typical of OLTP applications.

The TPC-A benchmark simulates a banking system. Information is stored for each account, teller, and branch in this system. During a transaction, an account is either credited or debited. The corresponding teller and branch must be updated to reflect this transaction. This benchmark is characterized by the following elements:

  Multiple online terminal sessions
  Significant disk input/output
  Moderate system and application execution time
  Transaction integrity (ACID properties)
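The credit/debit transaction described above can be sketched as a small in-memory model. This is an illustrative sketch only, not the benchmark's official SQL implementation; the table and field names simply mirror the account/teller/branch/history schema the specification describes.

```python
import time

# Toy in-memory "tables" mirroring the TPC-A schema.
accounts = {1: {"balance": 1000}}
tellers  = {1: {"balance": 0}}
branches = {1: {"balance": 0}}
history  = []

def credit_debit(account_id, teller_id, branch_id, delta):
    """Apply one credit (delta > 0) or debit (delta < 0) to an account,
    updating the corresponding teller and branch and logging the
    transaction to the history table. Returns the new account balance."""
    accounts[account_id]["balance"] += delta
    tellers[teller_id]["balance"]   += delta
    branches[branch_id]["balance"]  += delta
    history.append((account_id, teller_id, branch_id, delta, time.time()))
    return accounts[account_id]["balance"]

credit_debit(1, 1, 1, +100)   # credit 100 to account 1
```

In a real implementation, the three updates and the history insert would execute as a single ACID transaction against the database.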

The TPC-A benchmark employs only one update-intensive transaction type to load the system. Although this transaction type does reflect some of the workload generated by OLTP transactions, it does not reflect the entire range of OLTP environments. This lack of variance in transaction types contributed to the benchmark's obsolescence, although a simple, repeatable unit of work remains useful for exercising key system components. The TPC-A benchmark became obsolete on June 6, 1995; from that date on, no new TPC-A results could be published. As of December 6, 1995, all TPC-A results were removed from the official TPC results list.

The TPC-A benchmark is designed to show whole-system throughput; as a result, “terminal to terminal” performance is measured. An actual terminal device is simulated, and the response time of each transaction is measured. The TPC-A specification requires that 90 percent of all transactions complete with a response time under 2 seconds. The number of terminals is not fixed but is a function of the transaction rate achieved: the specification calls for 10 emulated users per tps (transactions per second) reported.
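These two rules can be expressed as a short sketch (the helper names are hypothetical, for illustration only):

```python
def required_terminals(tps_reported):
    """TPC-A requires 10 emulated users per reported tps, so a
    500-tpsA result implies 5,000 emulated terminals."""
    return 10 * tps_reported

def passes_response_time(response_times_s):
    """The specification requires that 90 percent of transactions
    complete in under 2 seconds."""
    under_2s = sum(1 for t in response_times_s if t < 2.0)
    return under_2s >= 0.9 * len(response_times_s)
```

For example, `required_terminals(500)` yields 5,000 emulated users, which is why large results required substantial front-end hardware.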

Handling this front-end processing requires additional machines in the benchmark configuration to offload the work of data input. Typically, a Transaction Monitor (TM) is used to multiplex these connections.

The TPC-A benchmark can be run in a wide area network (WAN) or local area network (LAN) configuration (the performance of the two modes cannot be compared). The Full Disclosure Report must report the cost of the system including 5 years of maintenance.

The TPC-A benchmark is scaled based on the performance reported: the larger the result to be published, the larger the size of the database. A 50 tpsA database has a size of 0.5GB; a 500 tpsA database has a size of 5GB. This configuration must also include enough disk space to provide for 90 days of online disk storage at the rate measured. This requirement significantly increases the size of the system being priced.
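The scaling rule works out to roughly 10MB of base database per tpsA, plus 90 days of online history at the measured rate. A rough sizing sketch follows; the history row size is an assumed illustration, since the specification defines the exact row layout:

```python
def tpca_sizing_gb(tps, history_row_bytes=50, days=90):
    """Estimate base database size and 90-day history storage (in GB)
    for a reported tpsA. The 50-byte history row is an assumption."""
    # Base database scales at ~0.01GB per tpsA:
    # 50 tpsA -> 0.5GB, 500 tpsA -> 5GB.
    base_gb = 0.01 * tps
    # One history row per transaction, retained online for `days` days.
    rows = tps * 86_400 * days
    history_gb = rows * history_row_bytes / 1e9
    return base_gb, history_gb
```

Running `tpca_sizing_gb(500)` gives a 5GB base database but roughly 194GB of history storage, which illustrates why the 90-day requirement dominated the priced configuration.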

The metrics used in the TPC-A benchmark are the throughput (tpsA) and price-per-tps. The throughput indicates the number of transactions per second with the TPC-A benchmark; the price/performance is the total system cost divided by the throughput. This second metric is designed to demonstrate the value of the system.
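The price/performance metric is simple division; the figures below are made up purely for illustration:

```python
def price_performance(total_system_cost, tps):
    """Price/performance: total priced system cost (including
    maintenance) divided by reported throughput."""
    return total_system_cost / tps

# A hypothetical $1,000,000 configuration achieving 200 tpsA
# would be reported at $5,000 per tps.
dollars_per_tps = price_performance(1_000_000, 200)
```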

The TPC-A benchmark was one of the first standardized benchmarks to provide real and fair comparability between systems. The TPC-A workload is quite intensive and is still useful as a workload generator for system vendors. Because the computer industry evolves quickly, the TPC-A benchmark no longer represents today’s workloads, and as such has become obsolete.

TPC-B

The TPC-B benchmark was adopted in 1989 as a follow-up to the TPC-A benchmark. Based on the TP1, or Credit/Debit, benchmark, TPC-B measures performance in an update-intensive environment typical of OLTP applications.

The TPC-B benchmark also simulates a banking system. Information is stored for each account, teller, and branch in this system. During a transaction, an account is either credited or debited. The corresponding teller and branch must be updated to reflect the transaction. This benchmark is characterized by the following elements:

  Significant disk input/output
  Moderate system and application execution time
  Transaction integrity (ACID properties)

The TPC-B benchmark employs only one update-intensive transaction type to load the system. Although this transaction type does reflect some of the workload generated by OLTP transactions, it does not reflect the entire range of OLTP environments. Because of the lack of emulated users, the TPC-B workload is seen as more of a stress test than an actual OLTP simulation. The TPC-B benchmark became obsolete on June 6, 1995; from that date on, no new TPC-B results could be published. As of December 6, 1995, all TPC-B results were removed from the official TPC results list.

Even though this benchmark has officially been retired, it is still used internally within the computer industry. The TPC-B benchmark is an excellent system stress test and is easily configured and run.

Although the TPC-B benchmark shares the same transaction profile and database schema as the TPC-A benchmark, the two cannot be compared. TPC-B is a batch-mode benchmark that does not simulate users as TPC-A does.
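Because there are no emulated terminals or think times, a TPC-B-style driver is essentially a tight loop submitting transactions back to back. A minimal sketch of such a stress driver, with a stand-in transaction body, might look like this:

```python
import time

def run_batch(transaction, count):
    """Submit `count` transactions back to back with no think time
    and report the measured throughput in transactions per second."""
    start = time.perf_counter()
    for _ in range(count):
        transaction()
    elapsed = time.perf_counter() - start
    return count / elapsed

# Stand-in for the real credit/debit transaction against the database.
balance = [0]
def dummy_txn():
    balance[0] += 1

tps = run_batch(dummy_txn, 100_000)
```

This batch structure, with no terminal emulation to configure, is what made TPC-B so easy to set up as an internal stress test.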

The TPC-B benchmark is scaled based on the performance reported; the larger the result to be published, the larger the size of the database. A 50 tpsB database has a size of 0.5GB; a 500 tpsB database has a size of 5GB (just as with the TPC-A benchmark). This configuration must also include enough disk space to provide for 30 days of online disk storage at the rate measured. This requirement increases the size of the system being priced.

The metrics used in the TPC-B benchmark are the throughput (tpsB) and price-per-tps. The throughput indicates the number of transactions per second with the TPC-B benchmark; the price/performance is the total system cost divided by the throughput. This second metric is designed to demonstrate the value of the system.

The TPC-B workload is intensive and is still useful as a workload generator for system vendors. The lack of front-end terminals makes the TPC-B benchmark a much easier workload generator to set up and run. Because the TPC-B benchmark is seen as not representing today’s workloads, it has become obsolete.



